Leveraging Benchmarking Data for Informed One-Shot Dynamic Algorithm Selection
A key challenge in the application of evolutionary algorithms in practice is
the selection of an algorithm instance that best suits the problem at hand.
What complicates this decision further is that different algorithms may be best
suited for different stages of the optimization process. Dynamic algorithm
selection and configuration are therefore well-researched topics in
evolutionary computation. However, while hyper-heuristic and parameter control
studies typically assume a setting in which the algorithm needs to be chosen
on the fly, while the optimization is running and without prior information,
AutoML approaches such
as hyper-parameter tuning and automated algorithm configuration assume the
possibility of evaluating different configurations before making a final
recommendation. In practice, however, we are often in a middle ground between
these two settings: we need to decide on the algorithm instance before the run
(the "one-shot" setting), but we have data, possibly in large amounts, on
which we can base an informed decision.
We analyze in this work how such prior performance data can be used to infer
informed dynamic algorithm selection schemes for the solution of pseudo-Boolean
optimization problems. Our specific use case considers a family of genetic
algorithms.
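To make the one-shot setting concrete, the following Python sketch shows how per-stage benchmarking data could drive such a decision before the run. The algorithm names, stage costs, and the restriction to a single switch point are hypothetical illustrations, not the paper's actual data or selection scheme.

```python
# A minimal sketch of informed one-shot dynamic algorithm selection.
# All names and numbers below are hypothetical illustrations.

# Expected number of evaluations each configuration needs to advance the
# best-so-far fitness from stage t to stage t+1 (toy benchmarking data).
stage_costs = {
    "GA(pc=0.0)": [120, 150, 300, 800],   # mutation-only configuration
    "GA(pc=0.5)": [100, 140, 350, 900],
    "GA(pc=1.0)": [90, 130, 500, 1500],   # fully crossover-based
}

def best_static(costs):
    """Single configuration with the lowest total expected cost."""
    return min(costs, key=lambda a: sum(costs[a]))

def best_one_shot_switch(costs):
    """Best (first algorithm, switch stage, second algorithm) triple,
    decided before the run from the benchmarking data alone."""
    n_stages = len(next(iter(costs.values())))
    best = None
    for a in costs:
        for b in costs:
            for s in range(n_stages + 1):  # switch after stage s
                total = sum(costs[a][:s]) + sum(costs[b][s:])
                if best is None or total < best[0]:
                    best = (total, a, s, b)
    return best

print("static :", best_static(stage_costs))
print("dynamic:", best_one_shot_switch(stage_costs))
```

On this toy data the dynamic schedule (crossover-based early, mutation-only late) beats every static choice, which is exactly the kind of effect the prior performance data is meant to reveal.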
Automated Configuration of Genetic Algorithms by Tuning for Anytime Performance
Finding the best configuration of algorithms' hyperparameters for a given
optimization problem is an important task in evolutionary computation. We
compare in this work the results of four different hyperparameter tuning
approaches for a family of genetic algorithms on 25 diverse pseudo-Boolean
optimization problems. More precisely, we compare previously obtained results
from a grid search with those obtained from three automated configuration
techniques: iterated racing, mixed-integer parallel efficient global
optimization, and mixed-integer evolution strategies.
Using two different cost metrics, expected running time and the area under
the empirical cumulative distribution function curve, we find that in several
cases the best configurations with respect to expected running time are
obtained when using the area under the empirical cumulative distribution
function curve as the cost metric during the configuration process. Our results
suggest that even when interested in expected running time performance, it
might be preferable to use anytime performance measures for the configuration
task. We also observe that tuning for expected running time is much more
sensitive to the budget that is allocated to the target algorithms.
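A sketch of the two cost metrics compared in this study, computed on toy data; the hitting times, targets, and budget below are assumed for illustration, and a real tuning run would extract them from the benchmark logs.

```python
# Toy illustration of the two cost metrics: expected running time (ERT)
# and the area under the ECDF curve. All values are hypothetical.
import numpy as np

budget = 1000           # evaluation budget per run (assumed)
targets = [10, 20, 30]  # fitness targets (assumed)

# hitting_times[i][j]: evaluations run i needed to reach targets[j],
# or None if the target was never reached within the budget.
hitting_times = [
    [100, 400, None],
    [80,  350, 900],
    [120, None, None],
]

def ert(times_to_target, budget):
    """ERT: total evaluations spent, divided by the number of
    successful runs (infinite if no run succeeded)."""
    spent = sum(budget if t is None else t for t in times_to_target)
    successes = sum(t is not None for t in times_to_target)
    return float("inf") if successes == 0 else spent / successes

def ecdf_auc(hitting_times, budget):
    """AUC of the ECDF: fraction of (run, target) pairs solved,
    averaged over all budgets 1..B (discrete mean)."""
    pairs = [t for run in hitting_times for t in run]
    frac_solved = [
        np.mean([t is not None and t <= b for t in pairs])
        for b in range(1, budget + 1)
    ]
    return float(np.mean(frac_solved))

print("ERT for target", targets[-1], ":",
      ert([run[-1] for run in hitting_times], budget))
print("AUC of the ECDF:", ecdf_auc(hitting_times, budget))
```

The ERT only looks at the final target and penalizes failed runs with the full budget, whereas the AUC rewards progress on every target at every budget, which is why it acts as an anytime performance measure.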
Benchmarking a Genetic Algorithm with Configurable Crossover Probability
We investigate a family of Genetic Algorithms (GAs) that
create offspring either by mutation or by recombining two randomly chosen
parents. By scaling the crossover probability, we can thus interpolate between
a mutation-only algorithm and a fully crossover-based GA. We analyze,
by empirical means, how the performance depends on the interplay of population
size and the crossover probability.
Our comparison on 25 pseudo-Boolean optimization problems reveals an
advantage of crossover-based configurations on several easy optimization tasks,
whereas the picture for more complex optimization problems is rather mixed.
Moreover, we observe that the "fast" mutation scheme with its power-law
distributed mutation strengths outperforms standard bit mutation on complex
optimization tasks when it is combined with crossover, but performs worse in
the absence of crossover.
We then take a closer look at the surprisingly good performance of the
crossover-based GAs on the well-known LeadingOnes benchmark
problem. We observe that the optimal crossover probability increases with
increasing population size. At the same time, it decreases with
increasing problem dimension, indicating that the advantages of crossover
are not visible in the asymptotic view classically applied in runtime analysis.
We therefore argue that a mathematical investigation for fixed dimensions might
help us observe effects which are not visible when focusing exclusively on
asymptotic performance bounds.
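A minimal sketch of the operator interplay studied here, assuming uniform crossover and a power-law ("fast") mutation scheme on bit strings; the exact operators and parameters of the GA family in the paper may differ.

```python
# Sketch of offspring creation with a configurable crossover probability.
# Operator details below are simplifying assumptions for illustration.
import random

def standard_bit_mutation(x, rate):
    """Flip each bit independently with the given rate."""
    return [b ^ (random.random() < rate) for b in x]

def fast_mutation_strength(n, beta=1.5):
    """Power-law distributed mutation strength k in 1..n//2,
    as in the 'fast' mutation scheme."""
    ks = list(range(1, n // 2 + 1))
    weights = [k ** -beta for k in ks]
    return random.choices(ks, weights=weights)[0]

def uniform_crossover(p1, p2):
    """Pick each bit from one of the two parents at random."""
    return [random.choice(pair) for pair in zip(p1, p2)]

def make_offspring(population, pc, n, fast=True):
    """With probability pc, recombine two random parents;
    otherwise mutate a single random parent."""
    if random.random() < pc:
        p1, p2 = random.sample(population, 2)
        return uniform_crossover(p1, p2)
    parent = random.choice(population)
    if fast:
        k = fast_mutation_strength(n)      # power-law strength
        child = parent[:]
        for i in random.sample(range(n), k):
            child[i] ^= 1                  # flip exactly k bits
        return child
    return standard_bit_mutation(parent, 1 / n)
```

Setting pc = 0 recovers the mutation-only algorithm and pc = 1 the fully crossover-based GA, which is the interpolation axis the benchmark study varies.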
IOHanalyzer: Performance Analysis for Iterative Optimization Heuristics
Benchmarking and performance analysis play an important role in understanding
the behaviour of iterative optimization heuristics (IOHs) such as local search
algorithms, genetic and evolutionary algorithms, Bayesian optimization
algorithms, etc. This task, however, involves manual setup, execution, and
analysis of the experiment on an individual basis, which is laborious and can
be mitigated by a generic and well-designed platform. For this purpose, we
propose IOHanalyzer, a new user-friendly tool for the analysis, comparison, and
visualization of performance data of IOHs.
Implemented in R and C++, IOHanalyzer is fully open source. It is available
on CRAN and GitHub. IOHanalyzer provides detailed statistics about fixed-target
running times and about fixed-budget performance of the benchmarked algorithms
on real-valued, single-objective optimization tasks. Performance aggregation
over several benchmark problems is possible, for example in the form of
empirical cumulative distribution functions. Key advantages of IOHanalyzer over
other performance analysis packages are its highly interactive design, which
allows users to specify the performance measures, ranges, and granularity that
are most useful for their experiments, and the possibility to analyze not only
performance traces, but also the evolution of dynamic state parameters.
IOHanalyzer can directly process performance data from the main benchmarking
platforms, including the COCO platform, Nevergrad, and our own IOHexperimenter.
An R programming interface is provided for users who prefer finer
control over the implemented functionalities.
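IOHanalyzer itself is implemented in R and C++; the Python sketch below only illustrates, on a hypothetical performance trace, the two complementary views it reports, and is not the tool's API.

```python
# Illustration of the fixed-target and fixed-budget views of a single
# performance trace (toy, assumed data; maximization).

# trace[i] = best-so-far fitness after evaluation i+1
trace = [1.0, 1.0, 2.5, 2.5, 2.5, 4.0, 4.0, 5.5, 5.5, 5.5]

def fixed_target_rt(trace, target):
    """Fixed-target view: evaluations until the target is first hit."""
    for i, f in enumerate(trace, start=1):
        if f >= target:
            return i
    return None  # target not reached within the budget

def fixed_budget_value(trace, budget):
    """Fixed-budget view: best fitness found within the budget."""
    return trace[min(budget, len(trace)) - 1]

print(fixed_target_rt(trace, 4.0))   # -> 6 evaluations
print(fixed_budget_value(trace, 4))  # -> 2.5
```

Aggregating such fixed-target hitting times over many runs and problems yields the empirical cumulative distribution functions mentioned above.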
When to be Discrete: Analyzing Algorithm Performance on Discretized Continuous Problems
The domain of an optimization problem is seen as one of its most important
characteristics. In particular, the distinction between continuous and discrete
optimization is rather impactful. Based on this distinction, the optimization
algorithm, the analysis method, and more are chosen. However, in practice, no problem is
ever truly continuous. Whether this is caused by computing limits or more
tangible properties of the problem, most variables have a finite resolution.
In this work, we use the notion of the resolution of continuous variables to
discretize problems from the continuous domain. We explore how the resolution
impacts the performance of continuous optimization algorithms. Through a
mapping to integer space, we are able to compare these continuous optimizers to
discrete algorithms on the exact same problems. We show that the standard
CMA-ES fails when discretization is added to the problem.
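A minimal sketch of the discretization idea: snap every continuous variable to a grid of a given resolution, with an integer-space representation of the same grid; the function names and snapping scheme are assumptions for illustration, not the paper's exact construction.

```python
# Discretize a continuous variable at a given resolution and map it to
# integer space; names and scheme are illustrative assumptions.

def to_integer(x, lo, hi, resolution):
    """Map x in [lo, hi] to one of `resolution` evenly spaced levels."""
    steps = resolution - 1
    return round((x - lo) / (hi - lo) * steps)

def to_continuous(k, lo, hi, resolution):
    """Inverse map: integer level k back to its representative point."""
    steps = resolution - 1
    return lo + k / steps * (hi - lo)

def discretized(f, lo, hi, resolution):
    """Wrap a continuous objective so every input is snapped to the grid
    first; continuous and discrete optimizers then see the exact same
    problem, one via real inputs, the other via integer levels."""
    def g(xs):
        snapped = [
            to_continuous(to_integer(x, lo, hi, resolution),
                          lo, hi, resolution)
            for x in xs
        ]
        return f(snapped)
    return g

sphere = lambda xs: sum(x * x for x in xs)
coarse = discretized(sphere, -5.0, 5.0, resolution=11)  # step size 1.0
print(coarse([0.4, 2.6]))  # -> 0.0**2 + 3.0**2 = 9.0
```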